AlphaFold 3
AlphaFold Changed Science. After 5 Years, It's Still Evolving
WIRED spoke with DeepMind's Pushmeet Kohli about the recent past, and promising future, of the Nobel Prize-winning research project that changed biology and chemistry forever.
[Image: Amino acids "folded" to form a protein.]
Over the past few years, we've periodically reported on AlphaFold's successes; last year, it won the Nobel Prize in Chemistry. Until AlphaFold's debut in November 2020, DeepMind had been best known for teaching an artificial intelligence to beat human champions at the ancient game of Go. Its work culminated in the compilation of a database that now contains over 200 million predicted structures, essentially the entire known protein universe, and is used by nearly 3.5 million researchers in 190 countries around the world. The Nature article published in 2021 describing the algorithm has been cited 40,000 times to date. Last year, AlphaFold 3 arrived, extending the capabilities of artificial intelligence to DNA, RNA, and drugs.
- North America > United States > California > Los Angeles County > Los Angeles (0.04)
- Europe > Slovakia (0.04)
- Europe > Czechia (0.04)
- Health & Medicine > Pharmaceuticals & Biotechnology (1.00)
- Information Technology (0.95)
- Energy > Renewable > Geothermal (0.47)
- Leisure & Entertainment > Games > Go (0.35)
Molecular Embedding-Based Algorithm Selection in Protein-Ligand Docking
Wang, Jiabao Brad, Cao, Siyuan, Wu, Hongxuan, Yuan, Yiliang, Misir, Mustafa
Selecting an effective docking algorithm is highly context-dependent, and no single method performs reliably across structural, chemical, or protocol regimes. We introduce MolAS, a lightweight algorithm selection system that predicts per-algorithm performance from pretrained protein-ligand embeddings using attentional pooling and a shallow residual decoder. With only hundreds to a few thousand labelled complexes, MolAS achieves up to 15% absolute improvement over the single-best solver (SBS) and closes 17-66% of the Virtual Best Solver (VBS)-SBS gap across five diverse docking benchmarks. Analyses of reliability, embedding geometry, and solver-selection patterns show that MolAS succeeds when the oracle landscape exhibits low entropy and separable solver behaviour, but collapses under protocol-induced hierarchy shifts. These findings indicate that the main barrier to robust docking AS is not representational capacity but instability in solver rankings across pose-generation regimes, positioning MolAS as both a practical in-domain selector and a diagnostic tool for assessing when AS is feasible.
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- Asia > Middle East > Jordan (0.04)
- Asia > China > Jiangsu Province (0.04)
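The MolAS architecture described above, attentional pooling over pretrained protein-ligand embeddings feeding a shallow residual decoder that scores each docking algorithm, can be sketched compactly. The shapes, weight names, and the VBS-SBS gap metric below are illustrative assumptions, not MolAS's actual implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attentional_pool(H, w):
    # H: (n_tokens, d) pretrained complex embeddings; w: (d,) learned query
    alpha = softmax(H @ w)        # (n_tokens,) attention over tokens
    return alpha @ H              # (d,) pooled complex embedding

def residual_decoder(z, W1, W2, W_out):
    # One residual block with a ReLU, then a linear head over k algorithms
    h = z + W2 @ np.maximum(W1 @ z, 0.0)
    return W_out @ h              # (k,) predicted per-algorithm performance

def gap_closed(selector_perf, sbs_perf, vbs_perf):
    # Fraction of the virtual-best-solver vs. single-best-solver gap closed
    return (selector_perf - sbs_perf) / (vbs_perf - sbs_perf)
```

Algorithm selection is then simply an argmax over the decoder's k scores; a selector that closes the full VBS-SBS gap would return 1.0 from `gap_closed`.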
Pearl: A Foundation Model for Placing Every Atom in the Right Location
Genesis Research Team, Dobles, Alejandro, Jovic, Nina, Leidal, Kenneth, Murugan, Pranav, Williams, David C., Wulsin, Drausin, Gruver, Nate, Ji, Christina X., Pruegsanusak, Korrawat, Scarpellini, Gianluca, Sharma, Ansh, Swiderski, Wojciech, Bootsma, Andrea, Bowen, Richard Strong, Chen, Charlotte, Chen, Jamin, Dämgen, Marc André, DiFrancesco, Benjamin, Fishman, J. D., Ivanova, Alla, Kagin, Zach, Li-Bland, David, Liu, Zuli, Morozov, Igor, Ouyang-Zhang, Jeffrey, Pickard, Frank C. IV, Shah, Kushal S., Shor, Ben, da Silva, Gabriel Monteiro, Tal, Roy, Tessmer, Maxx, Tilbury, Carl, Vetcher, Cyr, Zeng, Daniel, Al-Shedivat, Maruan, Faust, Aleksandra, Feinberg, Evan N., LeVine, Michael V., Pan, Matteus
Accurately predicting the three-dimensional structures of protein-ligand complexes remains a fundamental challenge in computational drug discovery that limits the pace and success of therapeutic design. Deep learning methods have recently shown strong potential as structural prediction tools, achieving promising accuracy across diverse biomolecular systems. However, their performance and utility are constrained by scarce experimental data, inefficient architectures, physically invalid poses, and the limited ability to exploit auxiliary information available at inference. To address these issues, we introduce Pearl (Placing Every Atom in the Right Location), a foundation model for protein-ligand cofolding at scale. Pearl addresses these challenges with three key innovations: (1) training recipes that include large-scale synthetic data to overcome data scarcity; (2) architectures that incorporate an SO(3)-equivariant diffusion module to inherently respect 3D rotational symmetries, improving generalization and sample efficiency, and (3) controllable inference, including a generalized multi-chain templating system supporting both protein and non-polymeric components as well as dual unconditional/conditional modes. Pearl establishes a new state-of-the-art performance in protein-ligand cofolding. On the key metric of generating accurate (RMSD < 2 Å) and physically valid poses, Pearl surpasses AlphaFold 3 and other open source baselines on the public Runs N' Poses and PoseBusters benchmarks, delivering 14.5% and 14.2% improvements, respectively, over the next best model. In the pocket-conditional cofolding regime, Pearl delivers $3.6\times$ improvement on a proprietary set of challenging, real-world drug targets at the more rigorous RMSD < 1 Å threshold. Finally, we demonstrate that model performance correlates directly with synthetic dataset size used in training.
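The headline metric here, "accurate (RMSD < 2 Å) and physically valid poses", can be made concrete. Below is a minimal sketch of the RMSD-based accuracy criterion, assuming matched atom ordering and a shared protein frame; real benchmarks such as PoseBusters additionally handle symmetry-equivalent atoms and run physical-validity checks, both omitted here:

```python
import numpy as np

def ligand_rmsd(pred, ref):
    # Heavy-atom RMSD in angstroms between predicted and reference ligand
    # coordinates (n_atoms, 3), with no superposition applied.
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    return float(np.sqrt(((pred - ref) ** 2).sum(axis=1).mean()))

def pose_accurate(pred, ref, threshold=2.0):
    # Success criterion: RMSD below the threshold (2 angstroms by default).
    return ligand_rmsd(pred, ref) < threshold
```

A pose where every atom is displaced by exactly 1 Å gives an RMSD of 1.0 and counts as a success at the 2 Å threshold but a failure at the stricter 1 Å threshold used for Pearl's pocket-conditional results.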
FlashBias: Fast Computation of Attention with Bias
Wu, Haixu, Guo, Minghao, Ma, Yuezhou, Sun, Yuanxu, Wang, Jianmin, Matusik, Wojciech, Long, Mingsheng
Attention with bias, which extends standard attention by introducing prior knowledge as an additive bias matrix to the query-key scores, has been widely deployed in vision, language, protein-folding and other advanced scientific models, underscoring its status as a key evolution of this foundational module. However, introducing bias terms creates a severe efficiency bottleneck in attention computation. It disrupts the tightly fused memory-compute pipeline that underlies the speed of accelerators like FlashAttention, thereby stripping away most of their performance gains and leaving biased attention computationally expensive. Surprisingly, despite its common usage, targeted efficiency optimization for attention with bias remains absent, which seriously hinders its application in complex tasks. Diving into the computation of FlashAttention, we prove that its optimal efficiency is determined by the rank of the attention weight matrix. Inspired by this theoretical result, this paper presents FlashBias based on the low-rank compressed sensing theory, which can provide fast-exact computation for many widely used attention biases and a fast-accurate approximation for biases in general formalizations. FlashBias can fully take advantage of the extremely optimized matrix multiplication operation in modern GPUs, achieving 1.5$\times$ speedup for Pairformer in AlphaFold 3, and over 2$\times$ speedup for attention with bias in vision and language models without loss of accuracy. Code is available at this repository: https://github.com/thuml/FlashBias.
- North America > United States > Virginia (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Asia > China (0.04)
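The core identity FlashBias appears to exploit can be checked numerically: when the bias matrix is low rank, B = U Wᵀ, the biased score matrix Q Kᵀ/√d + B equals a plain, unbiased score matrix over widened queries and keys, so a fused kernel like FlashAttention can run unmodified on the concatenation. A minimal NumPy sketch of that identity (illustrative only; the actual FlashBias implementation works at the GPU-kernel level):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def biased_attention(Q, K, V, B):
    # Reference computation: softmax(Q K^T / sqrt(d) + B) V
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d) + B) @ V

def folded_attention(Q, K, V, U, W):
    # If B = U W^T with rank r << n, fold the bias into the inputs:
    # [Q/sqrt(d) | U] [K | W]^T = Q K^T / sqrt(d) + U W^T,
    # so a standard fused attention kernel handles the bias for free.
    d = Q.shape[-1]
    Qx = np.concatenate([Q / np.sqrt(d), U], axis=1)
    Kx = np.concatenate([K, W], axis=1)
    return softmax(Qx @ Kx.T) @ V
```

Both routines produce identical outputs whenever the bias genuinely factors as U Wᵀ; for general biases, FlashBias reports a fast approximate variant.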
Resilient Biosecurity in the Era of AI-Enabled Bioweapons
Feldman, Jonathan, Feldman, Tal
Recent advances in generative biology have enabled the design of novel proteins, creating significant opportunities for drug discovery while also introducing new risks, including the potential development of synthetic bioweapons. Existing biosafety measures primarily rely on inference-time filters such as sequence alignment and protein-protein interaction (PPI) prediction to detect dangerous outputs. In this study, we evaluate the performance of three leading PPI prediction tools: AlphaFold 3, AF3Complex, and SpatialPPIv2. These models were tested on well-characterized viral-host interactions, such as those involving Hepatitis B and SARS-CoV-2. Despite being trained on many of the same viruses, the models fail to detect a substantial number of known interactions. Strikingly, none of the tools successfully identify any of the four experimentally validated SARS-CoV-2 mutants with confirmed binding. These findings suggest that current predictive filters are inadequate for reliably flagging even known biological threats and are even more unlikely to detect novel ones. We argue for a shift toward response-oriented infrastructure, including rapid experimental validation, adaptable biomanufacturing, and regulatory frameworks capable of operating at the speed of AI-driven developments.
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- North America > United States > Connecticut > New Haven County > New Haven (0.04)
- Asia > Middle East > Yemen > Amanat Al Asimah > Sanaa (0.04)
From Prediction to Simulation: AlphaFold 3 as a Differentiable Framework for Structural Biology
Abbaszadeh, Alireza, Shahlaee, Armita
AlphaFold 3 represents a transformative advancement in computational biology, enhancing protein structure prediction through novel multi-scale transformer architectures, biologically informed cross-attention mechanisms, and geometry-aware optimization strategies. These innovations dramatically improve predictive accuracy and generalization across diverse protein families, surpassing previous methods. Crucially, AlphaFold 3 embodies a paradigm shift toward differentiable simulation, bridging traditional static structural modeling with dynamic molecular simulations. By reframing protein folding predictions as a differentiable process, AlphaFold 3 serves as a foundational framework for integrating deep learning with physics-based molecular simulation.
Fitness aligned structural modeling enables scalable virtual screening with AuroBind
Zhang, Zhongyue, Rao, Jiahua, Zhong, Jie, Bai, Weiqiang, Wang, Dongxue, Ning, Shaobo, Qiao, Lifeng, Xu, Sheng, Ma, Runze, Hua, Will, Chen, Jack Xiaoyu, Zhang, Odin, Lu, Wei, Feng, Hanyi, Yang, He, Shi, Xinchao, Li, Rui, Ouyang, Wanli, Ma, Xinzhu, Wang, Jiahao, Zhang, Jixian, Duan, Jia, Sun, Siqi, Zhang, Jian, Zheng, Shuangjia
Most human proteins remain undrugged: over 96% are unexploited by approved therapeutics. While structure-based virtual screening promises to expand the druggable proteome, existing methods lack atomic-level precision and fail to predict binding fitness, limiting translational impact. We present AuroBind, a scalable virtual screening framework that fine-tunes a custom atomic-level structural model on million-scale chemogenomic data. AuroBind integrates direct preference optimization, self-distillation from high-confidence complexes, and a teacher-student acceleration strategy to jointly predict ligand-bound structures and binding fitness. The proposed models outperform state-of-the-art models on structural and functional benchmarks while enabling 100,000-fold faster screening across ultra-large compound libraries. In a prospective screen across ten disease-relevant targets, AuroBind achieved experimental hit rates of 7-69%, with top compounds reaching sub-nanomolar to picomolar potency. For the orphan GPCRs GPR151 and GPR160, AuroBind identified both agonists and antagonists with success rates of 16-30%, and functional assays confirmed GPR160 modulation in liver and prostate cancer models. AuroBind offers a generalizable framework for structure-function learning and high-throughput molecular screening, bridging the gap between structure prediction and therapeutic discovery.
- Asia > China > Shanghai > Shanghai (0.05)
- Asia > China > Guangdong Province (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- Research Report > New Finding (0.92)
- Research Report > Experimental Study (0.92)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Pharmaceuticals & Biotechnology (1.00)
- Information Technology > Data Science (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.46)
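The AuroBind abstract lists direct preference optimization (DPO) among its training components. As a rough illustration only (the abstract does not give AuroBind's exact objective, and adapting DPO to structural models involves choices not shown here), the standard DPO loss for one preferred/rejected output pair can be sketched as:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    # Standard DPO objective for a preferred (w) / rejected (l) pair:
    #   -log sigmoid(beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l)))
    # computed via log1p(exp(-margin)) for numerical stability when the
    # margin is non-negative.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return math.log1p(math.exp(-margin))
```

The loss equals log 2 when the policy matches the reference model, and shrinks as the policy raises the preferred output's log-probability relative to the rejected one's.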
Non-Canonical Crosslinks Confound Evolutionary Protein Structure Models
Sactipeptides--short for sulfur-to-α-carbon thioether-containing peptides--are a small but growing subclass of RiPP natural products characterized by one or more intramolecular thioether linkages known as sactionine bonds. These unique cross-links are formed when a radical S-adenosylmethionine (rSAM) enzyme facilitates the covalent bonding of the sulfur atom of a cysteine residue to the α-carbon of another amino acid in the peptide backbone. The result is a tightly cross-linked polycyclic peptide in which the thioether bridges from residue side chain to backbone impart rigidity and extreme stability against heat, pH, and proteases (Flühe & Marahiel, 2013). While the first sactipeptide--subtilosin A, an antibiotic produced by Bacillus subtilis 168--was discovered in 1985 (Babasaki et al., 1985), this class of RiPPs is still very rare, with the pace of discovery only ramping up in recent years thanks to advances in genome mining (Chen et al., 2021; Zhong et al., 2023; Wambui et al., 2022). A literature search reveals that to date, only 10 sactipeptides have a known sequence and fully elucidated cross-link structure. Of these, only 5 sactipeptides--ruminococcin C1 (Roblin et al., 2020), subtilosin A (Kawulka et al., 2004), thurincin H (Sit et al., 2011b), thuricin CD α, and thuricin CD β (Sit et al., 2011a)--have an experimentally resolved 3D structure available in the PDB. To the best of our knowledge, the remaining 5 sactipeptides--huazacin (Hudson et al., 2019), hyicin 4244 (Duarte et al., 2018), sporulation killing factor A (Cao et al., 2021), streptosactin (Chen et al., 2021), and QmpA (Ali et al., 2022)--do not have a known structure. We list these 10 sactipeptides and their post-translational cross-links in Table 1.
Because half of these peptides are present in the PDB, and the other half have identified cross-links but not yet an experimentally resolved 3D structure, they form an ideal held-out dataset for an out-of-domain evaluation of the robustness of protein structure prediction models.
A Model-Centric Review of Deep Learning for Protein Design
Kyro, Gregory W., Qiu, Tianyin, Batista, Victor S.
Deep learning has transformed protein design, enabling accurate structure prediction, sequence optimization, and de novo protein generation. Advances in single-chain protein structure prediction via AlphaFold2, RoseTTAFold, ESMFold, and others have achieved near-experimental accuracy, inspiring successive work extended to biomolecular complexes via AlphaFold-Multimer, RoseTTAFold All-Atom, AlphaFold 3, Chai-1, Boltz-1, and others. Generative models such as ProtGPT2, ProteinMPNN, and RFdiffusion have enabled sequence and backbone design beyond natural evolution-based limitations. More recently, joint sequence-structure co-design models, including ESM3, have integrated both modalities into a unified framework, resulting in improved designability. Despite these advances, challenges still exist pertaining to modeling sequence-structure-function relationships and ensuring robust generalization beyond the regions of protein space spanned by the training data. Future advances will likely focus on joint sequence-structure-function co-design frameworks that are able to model the fitness landscape more effectively than models that treat these modalities independently. Current capabilities, coupled with the dizzying rate of progress, suggest that the field will soon enable rapid, rational design of proteins with tailored structures and functions that transcend the limitations imposed by natural evolution. In this review, we discuss the current capabilities of deep learning methods for protein design, focusing on some of the most revolutionary and capable models with respect to their functionality and the applications that they enable, leading up to the current challenges of the field and the optimal path forward.
- North America > United States > Connecticut > New Haven County > New Haven (0.04)
- Europe > Switzerland > Ticino > Bellinzona (0.04)
- Asia > China > Beijing > Beijing (0.04)
Revealed: The best inventions of 2024 - from Tesla's futuristic Robotaxi to Huawei's tri-fold smartphone
From the steam engine in 1712 to the first ever iPhone in 2007, each year sees the birth of ever more incredible inventions. And after a year of mind-boggling tech, it's clear that 2024 has been no exception to the rule. The last 12 months have seen brilliant minds from around the world creating some mind-blowing and potentially world-changing breakthroughs. With 2024 almost at its end, MailOnline has taken a look back at some of this year's coolest gadgets and most exciting innovations. From an AI for designing proteins to a real-life pair of Wallace and Gromit's 'techno trousers', these inventions are a glimpse of how we all might be living in the future. And when it comes to big breakthroughs, this year has been a resounding success for billionaire Elon Musk.
- North America > United States (0.04)
- Europe > United Kingdom (0.04)
- Asia > Singapore (0.04)
- Transportation > Ground > Road (1.00)
- Information Technology > Robotics & Automation (1.00)
- Health & Medicine (1.00)
- (2 more...)
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles (1.00)